Orar: Ontology Reasoning via Abstraction Refinement
Orar is a scalable ontology reasoner. Its central technique reduces reasoning over an ontology with a large dataset (ABox) to reasoning over a smaller (compressed) abstraction of this ontology. Currently, Orar supports the Description Logic Horn SHOIF.
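To give an intuition for the abstraction step (a simplified reading of the technique described in the papers listed under References), the following Java sketch groups ABox individuals that have exactly the same set of asserted concepts and represents each group by a single abstract individual. This is only an illustration with made-up data; it ignores role assertions and the refinement step, and it is not Orar's actual implementation.

import java.util.*;

public class AbstractionSketch {
    public static void main(String[] args) {
        // Toy ABox: each individual is mapped to its set of asserted concepts.
        Map<String, Set<String>> abox = new HashMap<>();
        abox.put("alice", Set.of("Person", "Employee"));
        abox.put("bob", Set.of("Person", "Employee"));
        abox.put("carol", Set.of("Person", "Manager"));
        abox.put("acme", Set.of("Company"));

        // Group individuals that have exactly the same set of asserted concepts.
        Map<Set<String>, List<String>> groups = new HashMap<>();
        for (Map.Entry<String, Set<String>> entry : abox.entrySet()) {
            groups.computeIfAbsent(entry.getValue(), k -> new ArrayList<>()).add(entry.getKey());
        }

        // One abstract individual per group: a reasoner only has to process
        // groups.size() individuals instead of abox.size(); assertions derived
        // for a representative are then transferred back to all group members.
        int i = 0;
        for (Map.Entry<Set<String>, List<String>> group : groups.entrySet()) {
            System.out.printf("abstract individual x%d with concepts %s represents %s%n",
                    i++, group.getKey(), group.getValue());
        }
        System.out.printf("original ABox individuals: %d, abstract individuals: %d%n",
                abox.size(), groups.size());
    }
}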
Orar is being developed by Birte Glimm, Yevgeny Kazakov, and Trung Kien Tran with support from the DFG-funded project on processing large amounts of data in ontologies via Abstraction and Refinement.
Software
- Orar is available here under this license
- Source code is available here
- Instructions on how to run Orar are available here
Docker Image
Run the Docker image for Orar with the following command:
docker run -p 4040:4040 orarhub/server:demo
Afterwards, you can access the web interface of Orar in a browser at the following address:
http://localhost:4040/
Note: If the system runs but finishes with errors, it is likely that Docker limits the memory available for running images and you need to increase the memory limit. For example, in Docker Desktop for Mac, this can be done by going to Docker --> Preferences --> Advanced and increasing the memory limit.
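To check how much memory is currently available to Docker, you can (on recent Docker versions) query it with the following command; the value is reported in bytes:
docker info --format '{{.MemTotal}}'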
Test Data
References
- Birte Glimm, Yevgeny Kazakov, and Trung-Kien Tran. Ontology Materialization by Abstraction Refinement in Horn SHOIF. AAAI 2017.
- Birte Glimm, Yevgeny Kazakov, and Trung-Kien Tran. Scalable Reasoning by Abstraction Beyond DL-Lite. RR 2016.
- Birte Glimm, Yevgeny Kazakov, Thorsten Liebig, Trung-Kien Tran, and Vincent Vialard. Abstraction Refinement for Ontology Materialization. ISWC 2014.
Contact
email: trung-kien.tran(at)uni-ulm.de
phone: +49 (0)731/50-24166
Postal Address
Trung Kien Tran
University of Ulm
Institute of Artificial Intelligence
D-89069 Ulm